
Autonomous Vehicles


Israel kills municipal worker at water well in south Lebanon: Mayor

Al Jazeera

An Israeli drone strike that killed one person in a south Lebanon village targeted a municipal worker operating a water well, not a Hezbollah member as the Israeli military had claimed, according to the mayor of Nabatieh al-Fawqa, Zein Ali Ghandour. Ghandour said on Thursday that the victim, Mahmoud Hasan Atwi, was "martyred" while on official duty, trying to provide water for the people of the town. "We condemn in the strongest terms this blatant aggression against civilians and civilian infrastructure as well as the Lebanese state and its institutions," the mayor said in a statement. Ghandour called on the international community to press the issue and put an end to Israeli violations. The Israeli military had claimed that it fired at a "Hezbollah operative" who it said was "rehabilitating a site" used by the group.


Expert-level protocol translation for self-driving labs

Neural Information Processing Systems

Recent developments in Artificial Intelligence (AI) models have propelled their application in scientific discovery, but validating and exploring these discoveries requires subsequent empirical experimentation. The concept of self-driving laboratories promises to automate, and thus accelerate, the experimental process that follows AI-driven discoveries. However, translating experimental protocols, originally crafted for human comprehension, into formats interpretable by machines presents significant challenges. Within a specific expert domain, these include the need for structured rather than natural language, the need to make tacit knowledge explicit, and the preservation of causality and consistency across protocol steps. At present, protocol translation predominantly requires the manual, labor-intensive involvement of domain experts and information technology specialists, making the process time-intensive. To address these issues, we propose a framework that automates protocol translation through a three-stage workflow, which incrementally constructs Protocol Dependence Graphs (PDGs) that are structured at the syntax level, completed at the semantics level, and linked at the execution level. Quantitative and qualitative evaluations demonstrate performance on par with that of human experts, underscoring the framework's potential to significantly expedite and democratize scientific discovery by elevating the automation capabilities of self-driving laboratories.
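To make the three-stage idea concrete, here is a minimal sketch of what a Protocol Dependence Graph might look like as a data structure: steps with structured actions and parameters, execution-level dependency edges, and a topological ordering that yields a machine-executable sequence while rejecting causality violations. The class and field names are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Step:
    """One protocol step, parsed from natural-language text into a
    structured (syntax-level) action with explicit parameters."""
    step_id: int
    action: str                                # e.g. "add", "stir", "heat"
    params: dict = field(default_factory=dict) # semantics-level details

@dataclass
class PDG:
    """Toy Protocol Dependence Graph: steps plus dependency edges."""
    steps: dict = field(default_factory=dict)  # step_id -> Step
    edges: set = field(default_factory=set)    # (src, dst): dst needs src

    def add_step(self, step: Step) -> None:
        self.steps[step.step_id] = step

    def link(self, src: int, dst: int) -> None:
        # Execution-level link: dst may not run before src completes.
        self.edges.add((src, dst))

    def execution_order(self) -> list:
        """Kahn's algorithm: a valid machine-executable ordering."""
        indeg = {s: 0 for s in self.steps}
        for _, dst in self.edges:
            indeg[dst] += 1
        queue = deque(s for s, d in indeg.items() if d == 0)
        order = []
        while queue:
            node = queue.popleft()
            order.append(node)
            for src, dst in self.edges:
                if src == node:
                    indeg[dst] -= 1
                    if indeg[dst] == 0:
                        queue.append(dst)
        if len(order) != len(self.steps):
            raise ValueError("cycle detected: protocol violates causality")
        return order

pdg = PDG()
pdg.add_step(Step(1, "add", {"reagent": "NaCl", "amount_g": 5}))
pdg.add_step(Step(2, "stir", {"minutes": 10}))
pdg.add_step(Step(3, "heat", {"celsius": 60}))
pdg.link(1, 2)
pdg.link(2, 3)
```

A real system would populate such a graph automatically from protocol text; the point here is only that structure, semantics, and execution order become explicit and checkable.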


Risk-Driven Design of Perception Systems (Department of Aeronautics and Astronautics, Stanford University)

Neural Information Processing Systems

Modern autonomous systems rely on perception modules to process complex sensor measurements into state estimates. These estimates are then passed to a controller, which uses them to make safety-critical decisions. It is therefore important to design perception systems that minimize errors which reduce the overall safety of the system. We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system. We formulate a risk function to quantify the effect of a given perceptual error on overall safety, and show how it can be used to design safer perception systems by including a risk-dependent term in the loss function and generating training data in risk-sensitive regions. We evaluate our techniques on a realistic vision-based aircraft detect-and-avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
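The core mechanism, a risk-dependent term in the training loss, can be sketched in a few lines. The risk function below is a hypothetical stand-in (the asymmetric weighting is an assumption for illustration, not the paper's formulation): in a detect-and-avoid setting, under-estimating how close an intruder is would be the dangerous direction, so such errors are penalized more.

```python
import numpy as np

def risk(state_est: np.ndarray, state_true: np.ndarray) -> np.ndarray:
    """Hypothetical risk function: how much a given perceptual error
    degrades closed-loop safety. Errors in the 'unsafe' direction
    (here, positive error) get a larger weight; the 4x factor is an
    illustrative assumption."""
    err = state_est - state_true
    return np.where(err > 0, 4.0, 1.0)

def risk_weighted_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Squared error scaled by the risk term, so training spends its
    accuracy budget where perceptual errors are most unsafe."""
    w = risk(pred, target)
    return float(np.mean(w * (pred - target) ** 2))
```

Plugging a term like this into the loss (and oversampling training data from risk-sensitive regions) is the essence of the risk-driven design loop described above.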


The Download: the next anti-drone weapon, and powering AI's growth

MIT Technology Review

Imagine: China deploys hundreds of thousands of autonomous drones in the air, on the sea, and under the water--all armed with explosive warheads or small missiles. These machines descend in a swarm toward military installations on Taiwan and nearby US bases, and over the course of a few hours, a single robotic blitzkrieg overwhelms the US Pacific force before it can even begin to fight back. The proliferation of cheap drones means just about any group with the wherewithal to assemble and launch a swarm could wreak havoc, no expensive jets or massive missile installations required. The US armed forces are now hunting for a solution--and they want it fast. Every branch of the service and a host of defense tech startups are testing out new weapons that promise to disable drones en masse.


This giant microwave may change the future of war

MIT Technology Review

While the US has precision missiles that can shoot these drones down, they don't always succeed: A drone attack killed three US soldiers and injured dozens more at a base in the Jordanian desert last year. And each American missile costs orders of magnitude more than its targets, which limits their supply; countering thousand-dollar drones with missiles that cost hundreds of thousands, or even millions, of dollars per shot can only work for so long, even with a defense budget that could reach a trillion dollars next year. The US armed forces are now hunting for a solution--and they want it fast. Every branch of the service and a host of defense tech startups are testing out new weapons that promise to disable drones en masse. There are drones that slam into other drones like battering rams; drones that shoot out nets to ensnare quadcopter propellers; precision-guided Gatling guns that simply shoot drones out of the sky; electronic approaches, like GPS jammers and direct hacking tools; and lasers that melt holes clear through a target's side.


Place3D

Neural Information Processing Systems

The reliability of driving perception systems under unprecedented conditions is crucial for practical usage. Recent advancements have prompted increasing interest in multi-LiDAR perception. However, prevailing driving datasets predominantly use single-LiDAR systems and collect data devoid of adverse conditions, failing to capture the complexities of real-world environments. Addressing these gaps, we propose Place3D, a full-cycle pipeline that encompasses LiDAR placement optimization, data generation, and downstream evaluations. Our framework makes three appealing contributions. 1) To identify the most effective configurations for multi-LiDAR systems, we introduce the Surrogate Metric of the Semantic Occupancy Grids (M-SOG) to evaluate LiDAR placement quality.
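In the spirit of a surrogate metric over semantic occupancy grids, here is a toy placement score: the fraction of semantically occupied voxels that at least one sensor's coverage footprint reaches. This definition is an assumption for illustration only; it is not the paper's M-SOG formula.

```python
import numpy as np

def surrogate_placement_score(sem_grid: np.ndarray,
                              coverage_masks: list) -> float:
    """Toy surrogate for LiDAR placement quality (an assumed
    simplification, not M-SOG itself): fraction of semantically
    occupied cells covered by the union of sensor footprints."""
    occupied = sem_grid > 0                    # nonzero label = occupied
    covered = np.zeros_like(occupied)
    for mask in coverage_masks:                # union over all LiDARs
        covered |= mask
    hit = int(np.logical_and(occupied, covered).sum())
    return hit / max(int(occupied.sum()), 1)

# A 2x2 grid with three occupied cells; one sensor covers two of them.
grid = np.array([[1, 0],
                 [2, 3]])
masks = [np.array([[True, False],
                   [False, True]])]
```

A cheap, differentiable-or-searchable score like this is what makes it feasible to compare many candidate placements before generating full datasets for each.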



Supplementary Material for NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking

Neural Information Processing Systems

In this supplementary document, we first provide an overview of inconsistencies in existing planning benchmarks. Next, we give details on the NAVSIM implementation and the dataset used in our study. Moreover, we include descriptions of the baselines and present supplementary results. Finally, we provide information on the 2024 NAVSIM challenge and discuss the broader impact of our work.


NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking

Neural Information Processing Systems

Benchmarking vision-based driving policies is challenging. On one hand, open-loop evaluation with real data is easy, but these results do not reflect closed-loop performance. On the other, closed-loop evaluation is possible in simulation, but is hard to scale due to its significant computational demands.


Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline

Neural Information Processing Systems

Current end-to-end autonomous driving methods either run a controller on a planned trajectory or predict controls directly, two lines of research that have largely been studied separately. Seeing their potential mutual benefits, this paper takes the initiative to explore combining these two well-developed approaches. Specifically, our integrated method has two branches, for trajectory planning and direct control, respectively. The trajectory branch predicts the future trajectory, while the control branch uses a novel multi-step prediction scheme so that the relationship between current actions and future states can be reasoned about. The two branches are connected so that the control branch receives guidance from the trajectory branch at each time step, and the outputs of the two branches are then fused to achieve complementary advantages. We evaluate our results in the closed-loop urban driving setting with challenging scenarios using the CARLA simulator. Even with a monocular camera input, the proposed approach ranks first on the official CARLA Leaderboard, outperforming more complex candidates with multiple sensors or fusion mechanisms by a large margin.
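The fusion step can be illustrated with a minimal sketch: a classical controller turns the trajectory branch's waypoints into a control command, which is then blended with the control branch's direct prediction. The pure-pursuit-style controller and the fixed blending weight are illustrative assumptions, not the paper's actual fusion rule.

```python
import numpy as np

def waypoints_to_control(waypoints: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Classical controller on the trajectory branch's output:
    steer toward the first waypoint (a toy pure-pursuit variant)."""
    dx, dy = waypoints[0]
    steer = gain * float(np.arctan2(dy, dx))
    throttle = min(1.0, float(np.hypot(dx, dy)))
    return np.array([steer, throttle])

def fuse_controls(traj_ctrl: np.ndarray,
                  direct_ctrl: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    """Blend trajectory-derived and directly predicted controls.
    The fixed-weight average here is an assumed scheme; in practice
    the weighting could itself be learned or situation-dependent."""
    return alpha * traj_ctrl + (1 - alpha) * direct_ctrl

# Trajectory branch says "1 m straight ahead"; control branch
# (in practice a learned multi-step head) predicts its own command.
traj_ctrl = waypoints_to_control(np.array([[1.0, 0.0]]))
direct_ctrl = np.array([0.2, 0.8])
fused = fuse_controls(traj_ctrl, direct_ctrl)
```

The key design point is that neither branch is discarded: the trajectory provides geometric guidance while the direct head reacts to the current observation, and the fused command inherits both.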